
    The Minimum Rank Problem: a counterexample

    We provide a counterexample to a recent conjecture that the minimum rank of every sign pattern matrix can be realized by a rational matrix. We use one of the equivalences of the conjecture and some results from projective geometry. As a consequence of the counterexample, we show that there is a graph for which the minimum rank over the reals is strictly smaller than the minimum rank over the rationals. We also make some comments on the minimum rank of sign pattern matrices over different subfields of $\mathbb{R}$. Comment: 4 pages, 1 figure
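
    The conjecture concerns the minimum rank of a sign pattern: the smallest rank attainable by a real (or rational) matrix whose entries carry the prescribed signs. The sketch below is not taken from the paper; it only illustrates that definition on a small hypothetical pattern, and the helper name `realizes` and the example matrices are assumptions made for illustration.

```python
# A minimal sketch (not from the paper) of the definition behind the conjecture:
# the minimum rank of a sign pattern S is the smallest rank of any real matrix A
# whose entries match the signs prescribed by S.
import numpy as np

def realizes(A, S):
    """Check that matrix A has the sign pattern S (entries of S in {+1, 0, -1})."""
    return np.array_equal(np.sign(A), np.array(S, dtype=float))

# A hypothetical 3x3 sign pattern and one real matrix realizing it.
S = [[ 1,  1,  0],
     [ 1,  1,  1],
     [ 0,  1,  1]]
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])

assert realizes(A, S)
# Rank of this particular realization (here 2 < 3); the minimum rank of S is the
# minimum of this quantity over all matrices realizing S.
print(np.linalg.matrix_rank(A))
```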

    High rate locally-correctable and locally-testable codes with sub-polynomial query complexity

    In this work, we construct the first locally-correctable codes (LCCs), and locally-testable codes (LTCs) with constant rate, constant relative distance, and sub-polynomial query complexity. Specifically, we show that there exist binary LCCs and LTCs with block length $n$, constant rate (which can even be taken arbitrarily close to 1), constant relative distance, and query complexity $\exp(\tilde{O}(\sqrt{\log n}))$. Previously such codes were known to exist only with $\Omega(n^{\beta})$ query complexity (for constant $\beta > 0$), and there were several, quite different, constructions known. Our codes are based on a general distance-amplification method of Alon and Luby~\cite{AL96_codes}. We show that this method interacts well with local correctors and testers, and obtain our main results by applying it to suitably constructed LCCs and LTCs in the non-standard regime of \emph{sub-constant relative distance}. Along the way, we also construct LCCs and LTCs over large alphabets, with the same query complexity $\exp(\tilde{O}(\sqrt{\log n}))$, which additionally have the property of approaching the Singleton bound: they have almost the best-possible relationship between their rate and distance. This has the surprising consequence that asking for a large alphabet error-correcting code to further be an LCC or LTC with $\exp(\tilde{O}(\sqrt{\log n}))$ query complexity does not require any sacrifice in terms of rate and distance! Such a result was previously not known for any $o(n)$ query complexity. Our results on LCCs also immediately give locally-decodable codes (LDCs) with the same parameters.
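
    As background for the query-complexity parameter, the sketch below shows the classic 2-query local corrector for the Hadamard code; it is only an illustration of what "locally correctable" means and is unrelated to the distance-amplified construction in this work. The function names are assumptions.

```python
# A minimal sketch (not the construction from this paper) of a locally
# correctable code: the 2-query corrector for the Hadamard code.  Each codeword
# position is indexed by a vector x, and position x can be recovered from a
# mildly corrupted word with only 2 queries.
import random

def hadamard_encode(msg_bits):
    """Hadamard codeword: one bit <msg, x> (mod 2) for every index vector x in F_2^k."""
    k = len(msg_bits)
    return [sum(m & ((x >> i) & 1) for i, m in enumerate(msg_bits)) % 2
            for x in range(2 ** k)]

def locally_correct(corrupted, x, k):
    """Recover position x with 2 queries: c[r] XOR c[x ^ r] equals c[x] for random r."""
    r = random.randrange(2 ** k)
    return corrupted[r] ^ corrupted[x ^ r]

k = 4
word = hadamard_encode([1, 0, 1, 1])
word[3] ^= 1                              # corrupt one position (a small fraction)
votes = [locally_correct(word, 3, k) for _ in range(101)]
# Majority vote over independent runs recovers the original bit with high probability.
print(max(set(votes), key=votes.count))
```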

    A Storage-Efficient and Robust Private Information Retrieval Scheme Allowing Few Servers

    Since the concept of locally decodable codes was introduced by Katz and Trevisan in 2000, it is well known that information-theoretically secure private information retrieval schemes can be built using locally decodable codes. In this paper, we construct a Byzantine-robust PIR scheme using the multiplicity codes introduced by Kopparty et al. Our main contributions are, on the one hand, to avoid full replication of the database on each server, which significantly reduces the global redundancy, and, on the other hand, to achieve a much lower locality in the PIR context than in the LDC context. This shows that there exist two different notions: LDC-locality and PIR-locality. This is made possible by exploiting geometric properties of multiplicity codes.
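
    For readers unfamiliar with the PIR setting, the sketch below shows the folklore information-theoretic 2-server PIR scheme based on XOR of random subsets; it is not the multiplicity-code scheme of this paper, and the function names and parameters are illustrative assumptions.

```python
# A minimal sketch (not the scheme from this paper) of information-theoretic
# 2-server private information retrieval: each server sees only a uniformly
# random subset of indices, yet the client learns the bit it wants.
import random

def pir_query(n, i):
    """Client: random set S for server 1, and S symmetric-difference {i} for server 2."""
    S1 = {j for j in range(n) if random.random() < 0.5}
    S2 = S1 ^ {i}
    return S1, S2

def pir_answer(database, S):
    """Server: XOR of the requested positions."""
    ans = 0
    for j in S:
        ans ^= database[j]
    return ans

database = [1, 0, 0, 1, 1, 0, 1, 0]
i = 5
S1, S2 = pir_query(len(database), i)
# Every position except i is queried either twice or not at all, so it cancels
# under XOR and the client recovers database[i].
print(pir_answer(database, S1) ^ pir_answer(database, S2) == database[i])
```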

    On list recovery of high-rate tensor codes

    We continue the study of list recovery properties of high-rate tensor codes, initiated by Hemenway, Ron-Zewi, and Wootters (FOCS’17). In that work it was shown that the tensor product of an efficient (poly-time) high-rate globally list recoverable code is approximately locally list recoverable, as well as globally list recoverable in probabilistic near-linear time. This was used in turn to give the first capacity-achieving list decodable codes with (1) local list decoding algorithms, and with (2) probabilistic near-linear time global list decoding algorithms. This also yielded constant-rate codes approaching the Gilbert-Varshamov bound with probabilistic near-linear time global unique decoding algorithms. In the current work we obtain the following results: 1. The tensor product of an efficient (poly-time) high-rate globally list recoverable code is globally list recoverable in deterministic near-linear time. This yields in turn the first capacity-achieving list decodable codes with deterministic near-linear time global list decoding algorithms. It also gives constant-rate codes approaching the Gilbert-Varshamov bound with deterministic near-linear time global unique decoding algorithms. 2. If the base code is additionally locally correctable, then the tensor product is (genuinely) locally list recoverable. This yields in turn (non-explicit) constant-rate codes approaching the Gilbert-Varshamov bound that are locally correctable with query complexity and running time $N^{o(1)}$. This improves over prior work by Gopi et al. (SODA’17; IEEE Transactions on Information Theory’18) that only gave query complexity $N^{\varepsilon}$ with rate that is exponentially small in $1/\varepsilon$. 3. A nearly-tight combinatorial lower bound on output list size for list recovering high-rate tensor codes. This bound implies in turn a nearly-tight lower bound of $N^{\Omega(1/\log\log N)}$ on the product of query complexity and output list size for locally list recovering high-rate tensor codes.
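
    The basic object throughout is the tensor product of a code with itself. The sketch below, not taken from the paper, illustrates the operation for a toy base code: if C has generator matrix G, the codewords of the tensor code are the matrices G^T M G, equivalently the matrices all of whose rows and columns lie in C.

```python
# A minimal sketch (illustrative, not from the paper) of the tensor product of a
# code with itself, over GF(2): encode a k x k message matrix M as G^T M G, so
# that every row and every column of the result is a codeword of the base code.
import numpy as np

G = np.array([[1, 0, 1],          # generator of the [3, 2] even-parity code C
              [0, 1, 1]])

def tensor_encode(M):
    """Encode a k x k message matrix into an n x n tensor codeword (mod 2)."""
    return (G.T @ M @ G) % 2

M = np.array([[1, 0],
              [1, 1]])
W = tensor_encode(M)
print(W)
# Every row and every column of W has even parity, i.e. lies in C.
assert not (W.sum(axis=0) % 2).any() and not (W.sum(axis=1) % 2).any()
```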

    On Public Key Encryption from Noisy Codewords

    Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks. Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions. Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions. Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the ``approximate duality'' conjecture from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code). On the flip side, counterexamples to the above conjecture (if false) may lead to candidate public key encryption schemes with improved security guarantees. We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be {\em unconditionally} attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size. Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the {\em learning parity with noise (LPN)} problem in time $2^{O(n/\log\log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (RANDOM 2005) in a conceptually different way. Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.
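
    The common hardness assumption behind these schemes is the difficulty of inverting noisy linear encodings over the binary field. The sketch below, purely illustrative and not any of the schemes discussed above, generates learning-parity-with-noise (LPN) samples; the function and parameter names are assumptions.

```python
# A minimal sketch (illustrative only) of the "noisy linear encoding" problem:
# learning parity with noise (LPN) over the binary field.  Recovering the secret
# from such samples is the problem whose hardness these cryptosystems rely on.
import random

def lpn_samples(secret, num_samples, noise_rate):
    """Return pairs (a, <a, secret> + e mod 2) with Bernoulli(noise_rate) noise e."""
    n = len(secret)
    samples = []
    for _ in range(num_samples):
        a = [random.randrange(2) for _ in range(n)]
        noiseless = sum(ai * si for ai, si in zip(a, secret)) % 2
        e = 1 if random.random() < noise_rate else 0
        samples.append((a, (noiseless + e) % 2))
    return samples

secret = [random.randrange(2) for _ in range(16)]
samples = lpn_samples(secret, num_samples=64, noise_rate=0.125)
# The abstract's 2^{O(n / log log n)}-time attack applies to this problem given
# n^{1+eps} samples.
print(samples[0])
```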

    Geometric rank of tensors and subrank of matrix multiplication

    Motivated by problems in algebraic complexity theory (e.g., matrix multiplication) and extremal combinatorics (e.g., the cap set problem and the sunflower problem), we introduce the geometric rank as a new tool in the study of tensors and hypergraphs. We prove that the geometric rank is an upper bound on the subrank of tensors and the independence number of hypergraphs. We prove that the geometric rank is smaller than the slice rank of Tao, and relate geometric rank to the analytic rank of Gowers and Wolf in an asymptotic fashion. As a first application, we use geometric rank to prove a tight upper bound on the (border) subrank of the matrix multiplication tensors, matching Strassen's well-known lower bound from 1987.
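
    The central example is the matrix multiplication tensor ⟨n,n,n⟩. The sketch below, not from the paper, constructs it explicitly and checks that its three flattenings have full rank n²; the function name is an assumption made for illustration.

```python
# A minimal sketch (illustrative, not from the paper) constructing the matrix
# multiplication tensor <n,n,n>, the object whose (border) subrank the paper
# bounds via geometric rank.  The tensor has a 1 in slot ((i,j), (j,k), (k,i)).
import numpy as np

def matmul_tensor(n):
    """The 3-tensor of the bilinear map (X, Y) -> XY for n x n matrices."""
    T = np.zeros((n * n, n * n, n * n), dtype=int)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                T[i * n + j, j * n + k, k * n + i] = 1
    return T

T = matmul_tensor(2)            # the tensor <2,2,2> of 2x2 matrix multiplication
# The ranks of its three flattenings are all n^2 = 4, i.e. the tensor is concise.
print([np.linalg.matrix_rank(T.reshape(4, 16)),
       np.linalg.matrix_rank(np.moveaxis(T, 1, 0).reshape(4, 16)),
       np.linalg.matrix_rank(np.moveaxis(T, 2, 0).reshape(4, 16))])
```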

    On the Communication Complexity of Read-Once AC^0 Formulae

    We study the 2-party randomized communication complexity of read-once $AC^0$ formulae. For balanced AND-OR trees $T$ with $n$ inputs and depth $d$, we show that the communication complexity of the function $f^T(x, y) = T(x \circ y)$ is $\Omega(n/4^d)$, where $(x \circ y)_i$ is defined so that the resulting tree also has alternating levels of AND and OR gates. For each bit of $x, y$, the operation $\circ$ is either AND or OR, depending on the gate in $T$ to which it is an input. Using this, we show that for general AND-OR trees $T$ with $n$ inputs and depth $d$, the communication complexity of $f^T(x, y)$ is $n/2^{\Omega(d \log d)}$. These results generalize classical results on the communication complexity of set-disjointness (where $T$ is an OR gate) and recent results on the communication complexity of the TRIBES functions (where $T$ is a depth-2 read-once formula). Our techniques build on and extend the information complexity methodology for proving lower bounds on randomized communication complexity. Our analysis for trees of depth $d$ proceeds in two steps: (1) reduction to measuring the information complexity of binary depth-$d$ trees, and (2) proving lower bounds on the information complexity of binary trees. In order to execute this program, we carefully construct input distributions under which both these steps can be carried out simultaneously. We believe the tools we develop will prove useful in further studies of information complexity in particular, and communication complexity in general.
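
    The sketch below, not taken from the paper, spells out the two-party function $f^T(x, y) = T(x \circ y)$ for a small balanced AND-OR tree, with the per-leaf operation chosen opposite to that leaf's parent gate so that levels keep alternating; the exact indexing conventions and function names are assumptions.

```python
# A minimal sketch (illustrative) of the communication problem studied here: two
# parties hold x and y, and want to evaluate a balanced alternating AND-OR tree
# on the combined input x o y, where the per-leaf operation is the gate type
# opposite to the leaf's parent so that the extended tree still alternates.
def eval_tree(bits, depth, gate):
    """Evaluate a balanced binary AND-OR tree of the given depth on `bits`."""
    if depth == 0:
        return bits[0]
    half = len(bits) // 2
    child_gate = 'OR' if gate == 'AND' else 'AND'
    left = eval_tree(bits[:half], depth - 1, child_gate)
    right = eval_tree(bits[half:], depth - 1, child_gate)
    return (left and right) if gate == 'AND' else (left or right)

def f_T(x, y, depth, root_gate):
    """f^T(x, y) = T(x o y), with o chosen per-leaf to keep levels alternating."""
    leaf_parent = root_gate if depth % 2 == 1 else ('OR' if root_gate == 'AND' else 'AND')
    leaf_op = 'OR' if leaf_parent == 'AND' else 'AND'
    combined = [(xi and yi) if leaf_op == 'AND' else (xi or yi) for xi, yi in zip(x, y)]
    return eval_tree(combined, depth, root_gate)

# Depth-2 example with an AND root (4 leaves): the leaves' parents are OR gates,
# so each (x_i, y_i) pair is combined with AND, giving an AND of ORs of x_i AND y_i,
# i.e. the TRIBES-style function mentioned in the abstract.
print(f_T((1, 0, 1, 1), (1, 1, 0, 1), depth=2, root_gate='AND'))
```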